memory module
Turing Completeness of Bounded-Precision Recurrent Neural Networks
Previous works have proved that recurrent neural networks (RNNs) are Turing-complete. However, in the proofs, the RNNs allow for neurons with unbounded precision, which is neither practical in implementation nor biologically plausible. To remove this assumption, we propose a dynamically growing memory module made of neurons of fixed precision. The memory module dynamically recruits new neurons when more memories are needed, and releases them when memories become irrelevant. We prove that a 54-neuron bounded-precision RNN with growing memory modules can simulate a Universal Turing Machine, with time complexity linear in the simulated machine's time and independent of the memory size. The result is extendable to various other stack-augmented RNNs. Furthermore, we analyze the Turing completeness of both unbounded-precision and bounded-precision RNNs, revisiting and extending the theoretical foundations of RNNs.
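The abstract's growing memory module can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's construction: each "neuron" holds one value quantized to a fixed number of bits, a push recruits a new neuron, and a pop releases one, so no single neuron ever needs unbounded precision. The class and method names are illustrative.

```python
class GrowingMemoryModule:
    """Toy dynamically growing memory of fixed-precision cells."""

    def __init__(self, bits=8):
        self.bits = bits
        self.cells = []  # one fixed-precision "neuron" per stored memory

    def _quantize(self, x):
        # Clamp to [0, 1) and round to the precision of a single neuron.
        levels = 2 ** self.bits
        return max(0, min(levels - 1, round(x * levels))) / levels

    def push(self, x):
        # Recruit a new fixed-precision neuron when more memory is needed.
        self.cells.append(self._quantize(x))

    def pop(self):
        # Release the neuron once its memory becomes irrelevant.
        return self.cells.pop()

    def __len__(self):
        return len(self.cells)
```

The point of the sketch is the invariant the paper exploits: precision per cell stays constant, and only the *number* of cells grows with the simulated machine's memory use.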
CogniPair: From LLM Chatbots to Conscious AI Agents -- GNWT-Based Multi-Agent Digital Twins for Social Pairing -- Dating & Hiring Applications
Wanghao Ye, Sihan Chen, Yiting Wang, Shwai He, Bowei Tian, Guoheng Sun, Ziyi Wang, Ziyao Wang, Yexiao He, Zheyu Shen, Meng Liu, Yuning Zhang, Meng Feng, Yang Wang, Siyuan Peng, Yilong Dai, Zhenle Duan, Lang Xiong, Joshua Liu, Hanzhang Qin, Ang Li
Current large language model (LLM) agents lack the authentic human psychological processes necessary for genuine digital twins and social AI applications. To address this limitation, we present a computational implementation of Global Workspace Theory (GNWT) that integrates human cognitive architecture principles into LLM agents, creating specialized sub-agents for emotion, memory, social norms, planning, and goal-tracking, coordinated through a global workspace mechanism. However, authentic digital twins require accurate personality initialization. We therefore develop a novel adventure-based personality test that evaluates true personality through behavioral choices within interactive scenarios, bypassing the self-presentation bias found in traditional assessments. Building on these innovations, our CogniPair platform enables digital twins to engage in realistic simulated dating interactions and job interviews before real encounters, providing bidirectional cultural fit assessment for both romantic compatibility and workplace matching. Validation using 551 GNWT-Agents and the Columbia University Speed Dating dataset demonstrates 72% correlation with human attraction patterns, 77.8% match prediction accuracy, and 74% agreement in human validation studies. This work advances psychological authenticity in LLM agents and establishes a foundation for intelligent dating platforms and HR technology solutions.
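The coordination pattern the abstract describes can be sketched as a competition-and-broadcast loop: sub-agents propose content with a salience score, the workspace admits the most salient proposal, and the winner is broadcast back to every sub-agent. This is a generic global-workspace sketch under assumed names (`SubAgent`, `propose`, `receive`), not CogniPair's actual API, and the salience function here is a deliberately trivial stand-in.

```python
class SubAgent:
    """Toy specialized sub-agent (e.g. emotion, memory, planning)."""

    def __init__(self, name):
        self.name = name
        self.inbox = []  # broadcasts received from the global workspace

    def propose(self, observation):
        # Toy salience: longer agent names bid higher. A real system
        # would score relevance of the agent's output to the situation.
        return (len(observation) + len(self.name), f"{self.name}:{observation}")

    def receive(self, content):
        self.inbox.append(content)

def global_workspace_step(agents, observation):
    # Competition: the most salient proposal wins the workspace.
    salience, content = max(a.propose(observation) for a in agents)
    # Broadcast: the winning content becomes shared context for all agents.
    for a in agents:
        a.receive(content)
    return content
```

The design choice worth noting is that sub-agents never talk to each other directly; all shared state flows through the single workspace, which is what makes the "global broadcast" of workspace theories explicit in code.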
Variational Memory Addressing in Generative Models
Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well-suited for generative models as it naturally encourages multimodality which is a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory and the continuous part of the model can concentrate on modeling remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory.
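The core read operation described above can be made concrete with a small sketch: a query is compared against per-slot keys, the similarities define a categorical distribution over addresses, and a read samples one discrete address and returns its content. This only illustrates the stochastic read itself; the paper's variational training of the addressing distribution is not shown, and all names here are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def stochastic_read(memory, keys, query, rng):
    """Sample a discrete address and retrieve its content.

    memory: (N, D) stored contents, keys: (N, K) per-slot keys,
    query: (K,) read query, rng: numpy random Generator.
    """
    logits = keys @ query                    # similarity to each address
    probs = softmax(logits)                  # p(address | query)
    addr = rng.choice(len(memory), p=probs)  # sample one discrete address
    return addr, memory[addr]
```

Because the address is sampled rather than averaged over, a single read commits to one template, which is what gives the model the multimodality the abstract emphasizes.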
182bd81ea25270b7d1c2fe8353d17fe6-AuthorFeedback.pdf
We thank the reviewers for their time and helpful feedback. Below we respond to their comments in turn. 'Permutation invariance in write values': Yes, we process the values in parallel and then take the sum over the batch; we will make this clearer in the updated manuscript. We agree this discussion would be useful. 'Error bars for the plots in Figure 2': We will conduct multiple runs and include error bars in Figure 2. 'Add titles to each subgraph in Figure 5': Indeed, we should have done this.
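The permutation-invariance point in the response can be sketched as follows: each write value in the batch is processed independently by a shared map, and the results are summed, so reordering the batch cannot change the aggregate update. The linear projection `W` is a hypothetical stand-in for whatever per-value network the authors actually use.

```python
import numpy as np

def batched_write(values, W):
    """Permutation-invariant aggregation of a batch of write values.

    values: (B, D) write values, W: (D, M) shared per-value projection.
    """
    processed = values @ W        # process each value in parallel
    return processed.sum(axis=0)  # sum over the batch -> order-invariant
```

Summation is the simplest symmetric pooling; any permutation of the batch produces the identical update vector.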
Supplementary for "Turing Completeness of Bounded-Precision Recurrent Neural Networks"
Stephen Chung
Turing Machine is moving right. The proof is similar to that of Theorem 1 but with more neurons. The general idea of the proof is that the required update can be constructed as a two-step process. In the first step, we apply the equations used in the proof of Theorem 1 for neurons 1 to 6. The update equations for neurons 1 to 6 are the same as those in the proof of Theorem 1, except: 4. Tape neurons. The proof is similar to that of Theorem 2 but with more neurons.